3 research outputs found

    Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals

    Explainable artificial intelligence (XAI) has emerged as a promising solution for addressing the implementation challenges of AI/ML in healthcare. However, little is known about how developers and clinicians interpret XAI and what conflicting goals and requirements they may have. This paper presents the findings of a longitudinal multi-method study involving 112 developers and clinicians co-designing an XAI solution for a clinical decision support system. Our study identifies three key differences between developer and clinician mental models of XAI, including opposing goals (model interpretability vs. clinical plausibility), different sources of truth (data vs. patient), and the role of exploring new vs. exploiting old knowledge. Based on our findings, we propose design solutions that can help address the XAI conundrum in healthcare, including the use of causal inference models, personalized explanations, and ambidexterity between exploration and exploitation mindsets. Our study highlights the importance of considering the perspectives of both developers and clinicians in the design of XAI systems and provides practical recommendations for improving the effectiveness and usability of XAI in healthcare.

    Solving the Explainable AI Conundrum: How to Bridge the Gap Between Clinicians’ Needs and Developers’ Goals

    Explainable AI (XAI) is considered the leading solution for overcoming implementation hurdles of AI/ML in clinical practice. However, it is still unclear how clinicians and developers interpret XAI differently and whether building such systems is achievable or even desirable. This longitudinal multi-method study queries clinicians and developers (n = 112) as they co-developed the DCIP, an ML-based prediction system for Delayed Cerebral Ischemia. The resulting framework reveals that ambidexterity between exploration and exploitation can help bridge opposing goals and requirements and thereby improve the design and implementation of AI/ML in healthcare.

    ICU Cockpit: a platform for collecting multimodal waveform data, AI-based computational disease modeling and real-time decision support in the intensive care unit

    ICU Cockpit is a secure, fast, and scalable platform for collecting multimodal waveform data, for online and historical data visualization, and for online validation of algorithms in the intensive care unit. We present a network of software services that continuously stream waveforms from ICU beds to databases and a web-based user interface. Machine learning algorithms process the data streams and send their outputs to the user interface. The architecture and capabilities of the platform are described. Since 2016, the platform has processed over 89 billion data points (N = 979 patients) from 200 signals (0.5–500 Hz) and daily laboratory analyses. We present an infrastructure-based framework for deploying and validating algorithms for critical care. The ICU Cockpit is a Big Data platform for critical care medicine, especially for multimodal waveform data. Uniquely, it allows algorithms to integrate seamlessly into the live data stream to produce clinical decision support and predictions in clinical practice.
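    The abstract above describes the platform only at a high level. The following minimal Python sketch illustrates the general pattern it outlines (waveform samples streamed from a bedside monitor into storage, scored over a sliding window by an algorithm, and forwarded to a user interface). All names such as bedside_stream, SlidingWindowScorer, and push_to_ui are hypothetical, and the simulated data and threshold-based scorer are stand-ins, not any part of the actual ICU Cockpit implementation.

```python
# Minimal sketch of a waveform-streaming pipeline of the kind described above.
# All names are hypothetical; a real platform would use message queues,
# time-series databases, and clinically validated prediction models.
import random
import time
from collections import deque
from dataclasses import dataclass
from typing import Iterator, Optional


@dataclass
class WaveformSample:
    patient_id: str
    signal: str        # e.g. "ABP" for arterial blood pressure
    timestamp: float   # seconds since the epoch
    value: float


def bedside_stream(patient_id: str, signal: str, hz: float, n: int) -> Iterator[WaveformSample]:
    """Simulate a bedside monitor emitting n samples at roughly the given rate."""
    for _ in range(n):
        yield WaveformSample(patient_id, signal, time.time(), random.gauss(80.0, 5.0))
        time.sleep(1.0 / hz)


class SlidingWindowScorer:
    """Toy stand-in for an ML algorithm that scores a sliding window of samples."""

    def __init__(self, window_size: int = 20, limit: float = 90.0) -> None:
        self.window = deque(maxlen=window_size)  # most recent sample values
        self.limit = limit

    def update(self, sample: WaveformSample) -> Optional[float]:
        self.window.append(sample.value)
        if len(self.window) < self.window.maxlen:
            return None                           # not enough data yet
        mean = sum(self.window) / len(self.window)
        return mean / self.limit                  # crude "risk score"


def push_to_ui(patient_id: str, score: float) -> None:
    """Placeholder for forwarding a prediction to a web-based user interface."""
    print(f"[UI] patient={patient_id} risk={score:.2f}")


def run_pipeline() -> None:
    database = []                          # placeholder for a waveform database
    model = SlidingWindowScorer(window_size=20)
    for sample in bedside_stream("patient-001", "ABP", hz=50.0, n=200):
        database.append(sample)            # persist the raw stream
        score = model.update(sample)       # score the live stream
        if score is not None and score > 0.9:
            push_to_ui(sample.patient_id, score)


if __name__ == "__main__":
    run_pipeline()
```

    In a production setting, the in-memory list and the print call would correspond to the databases and web interface mentioned in the abstract, and the sliding-window scorer would be replaced by a validated prediction algorithm.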